10 research outputs found

    A virtualisation framework for embedded systems


    An FPGA-based real-time event sampler

    This paper presents the design and FPGA-implementation of a sampler that is suited for sampling real-time events in embedded systems. Such sampling is useful, for example, to test whether real-time events are handled in time on such systems. By designing and implementing the sampler as a logic analyzer on an FPGA, several design parameters can be explored and easily modified to match the behavior of different kinds of embedded systems. Moreover, the trade-off between price and performance becomes easy, as it mainly consists of choosing the appropriate type and speed grade of an FPGA family.

    Random Additive Signature Monitoring for Control Flow Error Detection

    Due to harsher working environments, soft errors or erroneous bit-flips occur more frequently in microcontrollers during execution. Without mitigation, such errors result in data corruption and control flow errors. Multiple software-implemented mitigation techniques have already been proposed. In this paper, we evaluate seven signature monitoring techniques in seven different test cases. We measure and compare their detection ratios, execution time overhead, and code size overhead. From the gathered results, we derive five requirements for an optimal signature monitoring technique. Based on these requirements, we propose a new signature monitoring technique called random additive signature monitoring (RASM). RASM uses signature updates with random values and optimally placed validity checks to detect interblock control flow errors. RASM has a higher detection ratio, lower execution time overhead, and lower code size overhead than the studied techniques.
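    The general signature-monitoring idea behind techniques like RASM can be sketched as follows. This is an illustrative example, not the paper's exact algorithm: the block constants, the `check` helper, and the fault simulation are all ours. Each basic block adds a compile-time constant to a runtime signature, and a validity check compares the signature against the value expected in that block; a control flow error that skips a block leaves the signature wrong.

    ```c
    #include <stdio.h>

    /* Illustrative sketch of additive signature monitoring (not the exact
     * RASM algorithm; constants and names are hypothetical). Each basic
     * block adds a compile-time constant to a runtime signature, and a
     * validity check compares it with the value expected in that block. */

    static unsigned sig;                  /* runtime signature */

    static int check(unsigned expected) {
        return sig == expected ? 0 : 1;   /* 1 = control flow error detected */
    }

    /* Correct execution: blocks A -> B -> C, each runs its signature update. */
    int run_correct(void) {
        sig = 0;
        sig += 17;                  /* block A: add own constant */
        if (check(17)) return 1;
        sig += 23 - 17;             /* block B: add own constant, remove A's */
        if (check(23)) return 1;
        sig += 31 - 23;             /* block C */
        return check(31);
    }

    /* Faulty execution: a simulated erroneous branch jumps from block A
     * straight into block C, skipping block B's signature update. */
    int run_faulty(void) {
        sig = 0;
        sig += 17;                  /* block A */
        if (check(17)) return 1;
        /* block B skipped by the simulated control flow error */
        sig += 31 - 23;             /* block C still runs its own update */
        return check(31);           /* sig is 25, not 31 -> error detected */
    }

    int main(void) {
        printf("correct run: %d, faulty run: %d\n", run_correct(), run_faulty());
        return 0;
    }
    ```

    The subtraction in each update removes the predecessor's contribution, so the signature value is unique per block; randomizing the constants (as RASM's name suggests) makes it unlikely that an erroneous jump lands on a coincidentally matching signature.
    
    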

    Random Additive Control Flow Error Detection

    Today, embedded systems are being used in many (safety-critical) applications. However, due to their decreasing feature size and supply voltage, such systems are more susceptible to external disturbances such as electromagnetic interference. These external disturbances are able to introduce bit-flips inside the microcontroller’s hardware. In turn, these bit-flips may also corrupt the software. A possible software corruption is a control flow error. This paper proposes a new software-implemented control flow error detection technique. The advantage of our technique, called Random Additive Control Flow Error Detection, is a high detection ratio with a low execution time overhead. Most control flow errors are detected, with a lower execution time overhead than the considered existing techniques.

    CDFEDT—Comparison of Data Flow Error Detection Techniques in Embedded Systems: an Empirical Study

    Embedded systems used in harsh environments are susceptible to bit-flips, which can cause data flow errors. In order to increase the reliability of embedded systems, numerous data flow error detection techniques have already been developed. It is, however, difficult to identify the best technique to apply, due to differences in the way they are evaluated in the current literature. This paper presents an empirical comparative study of seven existing techniques. We measured fault coverage, execution time overhead, and code size overhead. We conclude that soft error detection using software redundancy (SEDSR) and error detection by duplicated instructions (EDDI) have a better trade-off between fault coverage and overheads than software-implemented fault tolerance (SWIFT), critical block duplication (CBD), and overhead reduction (VAR3+). Error detection by diverse data and duplicated instructions (ED4I or EDDDDI) and software approach (SA) had better fault coverage at the expense of higher execution time and code size overhead.
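    The duplicated-instructions idea behind EDDI-style detection can be sketched as follows. This is a minimal illustration, not code from any of the compared papers: the function names and the bit-flip injection mechanism are assumptions. Every computation is performed twice on independent copies of the data, and the two copies are compared before the result is used, so a single bit-flip that corrupts one copy is detected.

    ```c
    #include <stdio.h>

    /* Illustrative sketch of EDDI-style data flow error detection (names
     * and structure are hypothetical). Each instruction is duplicated on
     * a shadow copy of the data; a comparison before the result is used
     * catches a bit-flip that corrupted either copy. */

    static int dfe_detected;        /* set when the two copies disagree */

    static int sum(const int *v, const int *v_dup, int n, unsigned flip_mask) {
        int acc = 0, acc_dup = 0;   /* master value and its shadow */
        for (int i = 0; i < n; i++) {
            acc     += v[i];        /* original instruction */
            acc_dup += v_dup[i];    /* duplicated instruction */
        }
        acc_dup ^= (int)flip_mask;  /* simulate a bit-flip in the shadow */
        if (acc != acc_dup)         /* compare before the result is used */
            dfe_detected = 1;
        return acc;
    }

    int main(void) {
        int v[] = {1, 2, 3, 4};
        dfe_detected = 0;
        sum(v, v, 4, 0);            /* fault-free run: copies agree */
        int clean = dfe_detected;
        sum(v, v, 4, 1u << 3);      /* flip bit 3 of the shadow copy */
        printf("clean: %d, faulted: %d\n", clean, dfe_detected);
        return 0;
    }
    ```

    The overheads measured in the study come directly from this structure: roughly doubled code size and execution time for the duplicated instructions, plus the cost of the comparison points.
    
    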

    An Improved Data Error Detection Technique for Dependable Embedded Software

    © 2018 IEEE. This paper presents a new software-implemented data error detection technique called Full Duplication and Selective Comparison. Our technique combines the ideas of existing techniques in order to increase the fault detection ratio and decrease the imposed code size and execution time overhead. As the name suggests, we opt to duplicate the entire code base and place comparison instructions in critical basic blocks only. The critical basic blocks are the blocks with two or more incoming edges. We evaluate our technique by implementing it for several case studies and by performing fault injection experiments. We then compare the obtained results to the parameters of three established techniques: Error Detection by Diverse Data and Duplicated Instructions, Critical Block Duplication and Software Implemented Fault Tolerance. The results show an average increase of 20.5% in fault detection ratio and an average decrease in code size and execution time overhead of 12.6% and 0.5%, respectively.
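    The selective-comparison placement described above can be sketched like this. This is a hypothetical illustration of the idea, not the authors' implementation: all computations are duplicated, but the comparison is placed only in a critical block, here the join after an if/else, which has two incoming edges. Errors in either arm propagate through the duplicated state and are caught at the join.

    ```c
    #include <stdio.h>

    /* Sketch of full duplication with selective comparison (hypothetical
     * code, not the paper's implementation). Both branch arms update the
     * duplicated state without checks; the comparison sits only in the
     * join block, which has two or more incoming edges and is therefore
     * "critical" in the paper's terminology. */

    static int mismatch;            /* set when the copies disagree */

    static int classify(int x, int flip_mask) {
        int y, y_dup;               /* duplicated state */
        if (x >= 0) {               /* arm 1: one incoming edge, no check */
            y = x * 2;
            y_dup = x * 2;
        } else {                    /* arm 2: one incoming edge, no check */
            y = -x;
            y_dup = -x;
        }
        y_dup ^= flip_mask;         /* optionally inject a fault into the shadow */
        /* join block: two incoming edges -> critical -> compare here */
        if (y != y_dup)
            mismatch = 1;
        return y;
    }

    int main(void) {
        mismatch = 0;
        classify(5, 0);             /* fault-free run */
        int clean = mismatch;
        classify(-3, 4);            /* bit-flip injected in one copy */
        printf("clean: %d, faulted: %d\n", clean, mismatch);
        return 0;
    }
    ```

    Comparing only at joins rather than after every duplicated instruction is what trades a small detection-latency increase for the code size and execution time reductions the abstract reports.
    
    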